
    New Stability Criterion for Discrete-Time Genetic Regulatory Networks with Time-Varying Delays and Stochastic Disturbances

    We propose an improved stability condition for a class of discrete-time genetic regulatory networks (GRNs) with interval time-varying delays and stochastic disturbances. By constructing a novel augmented Lyapunov-Krasovskii functional that contains triple summation terms, and by combining the lower bound lemma, the discrete-time Jensen inequality, and the free-weighting matrix method, we obtain a less conservative sufficient condition in terms of linear matrix inequalities (LMIs). The resulting LMIs can be readily solved in MATLAB. Finally, two numerical examples are provided to illustrate the effectiveness and advantages of the theoretical results.
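The abstract includes no code; as a rough illustration of the LMI-feasibility workflow it describes, the sketch below checks the basic discrete-time Lyapunov inequality A'PA - P < 0 for a fixed matrix A. This is not the paper's delayed-GRN criterion, and it uses the cvxpy library in Python rather than MATLAB; the system matrix A and the margin eps are illustrative assumptions.

```python
# Minimal sketch of an LMI feasibility test of the kind such stability
# conditions reduce to. NOT the paper's criterion: this checks only the
# plain discrete-time Lyapunov LMI  A'PA - P < 0  for an example A,
# using cvxpy in place of MATLAB's LMI solvers (both are assumptions).
import numpy as np
import cvxpy as cp

A = np.array([[0.5, 0.1],
              [0.0, 0.7]])   # example Schur-stable system matrix (assumed)
n = A.shape[0]
eps = 1e-6                   # small margin to enforce strict inequalities

P = cp.Variable((n, n), symmetric=True)
constraints = [
    P >> eps * np.eye(n),                      # P positive definite
    A.T @ P @ A - P << -eps * np.eye(n),       # Lyapunov decrease condition
]
prob = cp.Problem(cp.Minimize(0), constraints)  # pure feasibility problem
prob.solve()

print("LMI feasible (system stable):", prob.status == cp.OPTIMAL)
```

If the problem is feasible, the recovered P certifies stability; the paper's actual conditions would add delay- and disturbance-dependent blocks to a much larger LMI of the same general shape.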

    Bias and Extrapolation in Markovian Linear Stochastic Approximation with Constant Stepsizes

    We consider Linear Stochastic Approximation (LSA) with a constant stepsize and Markovian data. Viewing the joint process of the data and the LSA iterate as a time-homogeneous Markov chain, we prove its convergence to a unique limiting and stationary distribution in Wasserstein distance and establish non-asymptotic, geometric convergence rates. Furthermore, we show that the bias vector of this limit admits an infinite series expansion with respect to the stepsize. Consequently, the bias is proportional to the stepsize up to higher-order terms. This result stands in contrast with LSA under i.i.d. data, for which the bias vanishes. In the reversible-chain setting, we provide a general characterization of the relationship between the bias and the mixing time of the Markovian data, establishing that they are roughly proportional to each other. While Polyak-Ruppert tail averaging reduces the variance of the LSA iterates, it does not affect the bias. The above characterization allows us to show that the bias can be reduced using Richardson-Romberg extrapolation with m ≥ 2 stepsizes, which eliminates the m − 1 leading terms in the bias expansion. This extrapolation scheme leads to an exponentially smaller bias and an improved mean squared error, both in theory and empirically. Our results immediately apply to the Temporal Difference learning algorithm with linear function approximation, Markovian data, and constant stepsizes.
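As a hedged illustration of the extrapolation scheme described above, the sketch below runs scalar LSA on a two-state Markov chain at stepsizes alpha and 2*alpha and forms the m = 2 Richardson-Romberg combination 2*x̄(alpha) − x̄(2*alpha), which cancels the leading O(alpha) term in the bias expansion. The chain, the per-state coefficients, and the stepsize are illustrative choices, not the paper's experiments.

```python
# Minimal sketch of Richardson-Romberg extrapolation for scalar LSA with
# Markovian data and m = 2 stepsizes; all numerical choices are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Two-state Markov chain y_k with transition matrix Pmat.
Pmat = np.array([[0.9, 0.1],
                 [0.2, 0.8]])
pi = np.array([2/3, 1/3])            # stationary distribution of Pmat

a = np.array([1.0, 2.0])             # per-state drift coefficients a(y)
b = np.array([1.0, 3.0])             # per-state offsets b(y)
x_star = (pi @ b) / (pi @ a)         # fixed point of the mean dynamics

def lsa_tail_average(alpha, n_iters=1_000_000, burn_frac=0.5):
    """Run x_{k+1} = x_k + alpha * (b(y_k) - a(y_k) * x_k) along the chain
    and return the Polyak-Ruppert tail average of the iterates."""
    x, y, acc, count = 0.0, 0, 0.0, 0
    burn = int(burn_frac * n_iters)
    for k in range(n_iters):
        x += alpha * (b[y] - a[y] * x)
        y = 0 if rng.random() < Pmat[y, 0] else 1   # sample next state
        if k >= burn:
            acc += x
            count += 1
    return acc / count

alpha = 0.05
xa  = lsa_tail_average(alpha)        # tail average at stepsize alpha
x2a = lsa_tail_average(2 * alpha)    # tail average at stepsize 2*alpha
x_rr = 2 * xa - x2a                  # RR combination cancels the O(alpha) bias

print(f"bias at alpha   : {xa   - x_star:+.2e}")
print(f"bias at 2*alpha : {x2a  - x_star:+.2e}")
print(f"bias after RR   : {x_rr - x_star:+.2e}")
```

Since the stationary bias expands as B(alpha) = B1*alpha + B2*alpha^2 + ..., the combination 2*x̄(alpha) − x̄(2*alpha) leaves only the O(alpha^2) and higher terms, matching the m − 1 leading-term cancellation described in the abstract for m = 2.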